Section: Scientific Foundations

Sensors and information processing

Participants : Fawzi Nashashibi, Benjamin Lefaudeux, André Ducrot, Jianping Xie, Laurent Bouraoui, Paulo Lopes Resende, Hao Li.

Sensors and single-sensor information processing

The first step in the design of a control system is the choice of sensors and of the information we want to extract from them, either for driver assistance or for fully automated guided vehicles. We set aside the proprioceptive sensors, which are already well integrated; they provide information on the host vehicle state, such as its velocity and steering angle. Sensor data processing makes it possible to reach several objectives. The following topics are applications validated or under development in our team:

  • localization of the vehicle with respect to the infrastructure, i.e. lateral positioning on the road, which can be obtained by means of vision (lane markings) or by means of magnetic, optical or radar devices;

  • detection and localization of the surrounding vehicles and determination of their behavior, which can be obtained by a mix of vision-, laser- or radar-based data processing;

  • detection of obstacles other than vehicles (pedestrians, animals, objects on the road, etc.), which requires multisensor fusion techniques;

  • simultaneous localization and mapping as well as mobile object tracking, using a generic and robust laser-based SLAMMOT algorithm.

Since INRIA is deeply involved in image processing, range imaging and multisensor fusion, IMARA emphasizes vision techniques, particularly stereo vision, in collaboration with Kumamoto Lab (Japan), LITIS (Rouen) and Mines ParisTech.

Disparity Map Estimation

Participants : Laurent Bouraoui, André Ducrot, Fawzi Nashashibi, Hao Li, Benjamin Lefaudeux.

In a quite innovative approach presented in last year's report, we developed the Fly Algorithm, an evolutionary optimization applied to stereo vision and mobile robotics. Although successfully applied to real-time pedestrian detection using a vehicle-mounted stereo head (see the LOVe project), this technique could not be used for other robotics applications such as scene modeling or visual SLAM. What these require is a dense 3D representation of the environment, obtained with appropriate precision and at acceptable cost in computation time and resources.

Stereo vision is a reliable technique for obtaining a 3D scene representation from a pair of left and right images, and it is effective for various tasks in road environments. The central problem in stereo image processing is finding the corresponding pixels in both images, which leads to the so-called disparity estimation. Many autonomous vehicle navigation systems have adopted stereo vision techniques to construct disparity maps as a basic obstacle detection and avoidance mechanism.
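As a minimal illustration of what disparity estimation involves (a naive sketch, not the algorithm used by the team), the following function computes a disparity map by block matching on rectified grayscale images: for each left-image pixel it searches along the same row of the right image for the horizontal shift minimizing the sum of absolute differences (SAD) between small windows. The function name and parameters are illustrative.

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, win=2):
    """Naive block-matching disparity on rectified grayscale images:
    for each left pixel, test horizontal shifts d = 0..max_disp and
    keep the one minimising the sum of absolute differences (SAD)
    over a (2*win+1)^2 window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    L = np.pad(left.astype(np.float64), win, mode="edge")
    R = np.pad(right.astype(np.float64), win, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Practical systems replace this exhaustive per-pixel search with sub-pixel refinement and global or semi-global optimization, which is precisely where regularization constraints such as those discussed below come into play.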

In the past we also worked on an original approach that directly formulates the computation of the disparity field as a constrained optimization problem, in which a convex objective function is minimized under convex constraints. These constraints arise from prior knowledge and from the observed data. The minimization is carried out over the feasibility set, i.e. the intersection of the constraint sets, and the convex property sets are constructed from the various properties of the field to be estimated. In most stereo vision applications, the disparity map should be smooth in homogeneous areas while keeping sharp edges; this can be achieved with a suitable regularization constraint. We propose to use Total Variation as a regularization constraint, which avoids oscillations while preserving field discontinuities around object edges.

The algorithm we are developing to solve the disparity estimation problem has a block-iterative structure. This allows a wide range of constraints to be incorporated easily, possibly taking advantage of parallel computing architectures. This efficient algorithm allowed us to combine the Total Variation constraint with additional convex constraints, so as to smooth homogeneous regions while preserving discontinuities.
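To give an idea of how a Total Variation term smooths homogeneous regions while preserving discontinuities, here is a small sketch. The simple gradient-descent scheme, the smoothing parameter `eps` and the function name are illustrative choices, not the team's block-iterative algorithm: it minimizes a quadratic data-fidelity term plus a smoothed TV penalty over a disparity map.

```python
import numpy as np

def tv_smooth(f, lam=0.5, eps=0.1, step=0.05, iters=200):
    """Gradient descent on E(u) = 0.5*||u - f||^2 + lam * TV_eps(u),
    with TV_eps(u) = sum(sqrt(ux^2 + uy^2 + eps)) a smoothed Total
    Variation term: homogeneous areas are flattened while sharp
    disparity edges are largely preserved."""
    u = f.astype(np.float64).copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])  # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # divergence of (px, py) by backward differences (periodic
        # wrap at the borders, kept simple on purpose)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u
```

On a noisy disparity map containing a sharp depth edge, the flat regions come out nearly constant while the edge height is essentially retained, which is the behaviour the regularization constraint is chosen for.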

Finally, we are currently working on an original stereo-vision-based SLAM technique relying on the detection and registration of interest keypoints. The system is intended to perform 3D mapping as well as 3D localization of the ego-vehicle, using Monte Carlo and RANSAC techniques.
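The keypoint-registration step of such a stereo-vision SLAM can be illustrated as follows: given 3D keypoints matched between two frames, the rigid motion is estimated robustly with RANSAC over minimal 3-point samples, each fitted in closed form by the Kabsch (SVD) method. This is a self-contained sketch under simplified assumptions (exact correspondences plus gross outliers), not the team's implementation.

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form least-squares rigid transform (R, t) with Q ~ R @ P + t,
    for 3xN point sets, via SVD (Kabsch)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((P - cp) @ (Q - cq).T)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=300, thresh=0.05, seed=None):
    """RANSAC over minimal 3-point samples: keep the transform with the
    most inliers, then refit it on all inliers."""
    rng = np.random.default_rng(seed)
    n = P.shape[1]
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)
        R, t = kabsch(P[:, idx], Q[:, idx])
        inliers = np.linalg.norm(R @ P + t - Q, axis=0) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = kabsch(P[:, best_inliers], Q[:, best_inliers])
    return R, t, best_inliers
```

The recovered frame-to-frame transform gives the ego-motion increment; chaining these increments (and correcting drift, e.g. with Monte Carlo localization) yields the 3D trajectory and map.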

Cooperative Multi-sensor data fusion

Participants : Fawzi Nashashibi, Yann Dumortier, André Ducrot, Jianping Xie, Laurent Bouraoui, François Charlot, Hao Li.

Advanced Driver Assistance Systems (ADAS) and Cybercars applications are moving towards vehicle-infrastructure cooperation. In such a scenario, information from vehicle-based sensors, roadside sensors and a priori knowledge is generally combined, thanks to wireless communications, to build a probabilistic spatio-temporal model of the environment. Depending on the accuracy of this model, applications ranging from driver warning to fully autonomous driving can be performed.

IMARA has developed a framework for data acquisition, spatio-temporal localization and data sharing. The system is based on a methodology for integrating measurements from different sensors into a unique spatio-temporal frame provided by GPS receivers (WGS-84). Communicating entities, i.e. vehicles and roadside units, expose and share their knowledge in a database through network access. Experimental validation of the framework was performed by sharing and combining raw sensor and perception data to improve a local model of the environment. Communication between entities is based on WiFi ad-hoc networking using the Optimized Link State Routing (OLSR) protocol developed by the HIPERCOM research project at INRIA.
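Placing all measurements in a unique spatio-temporal frame anchored to WGS-84 can be sketched as follows. This hypothetical helper (the function name and reference-point convention are illustrative) projects a GPS position into a local East-North metric frame using a flat-earth equirectangular approximation, adequate over the few hundred metres relevant to cooperative perception.

```python
import math

R_EARTH = 6378137.0  # WGS-84 semi-major axis, metres

def wgs84_to_local_en(lat, lon, ref_lat, ref_lon):
    """Project a WGS-84 latitude/longitude (degrees) into a local
    East-North frame centred on (ref_lat, ref_lon), using a flat-earth
    equirectangular approximation valid over short ranges."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    east = R_EARTH * d_lon * math.cos(math.radians(ref_lat))
    north = R_EARTH * d_lat
    return east, north
```

Once every entity reports positions in the same local metric frame (with GPS-synchronized timestamps), sensor measurements from different vehicles and roadside units can be fused directly.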

The Collaborative Perception Framework (CPF) is a combined hardware/software approach that lets an entity treat remote information as its own. Using this approach, a communicating entity can see another remote entity's software objects as if they were local, and a sensor object can see the sensor data of other entities as its own sensor data. Last year's developments produced the basic hardware building blocks that ensure the proper functioning of the embedded architecture, including perception sensors, communication devices and processing tools. The final architecture relied on the SensorHub presented in the 2010 report. This year, we focused on the development of applications and demonstrators using this unique architecture:

  • A canonical application was developed to demonstrate platooning using vehicle-to-vehicle communications to exchange the vehicles' absolute positions, provided by their respective GPS receivers. This approach was presented at the ITS World Congress in the form of a cooperative driving demonstration with communicating vehicles. This demonstration was also the context of an international collaboration involving our team, the robotics center of ENSMP and the SwRI (see Section 8.3).

  • A similar demonstration was presented at the international workshop on “The automation for urban transport”, held in the French city of La Rochelle. There, three Cycabs demonstrated platooning capabilities as well as supervised, collision-free insertion at an intersection. The Intersection Collision Warning System (ICWS) application was built on top of CPF to warn a driver of a potential accident. It relies on precise spatio-temporal localization of entities and objects to compute the Time To Collision (TTC) variables, but also on a “Control Center” that collects the vehicles' positions and sends back to them appropriate instructions and speed profiles.

  • In the context of the HAVEit project, we developed a vehicle-to-vehicle and infrastructure-to-vehicle communication system capable of providing relevant data to the Co-pilot system (also developed by INRIA). These data are processed so that they can be taken into account by the manoeuvre planning and trajectory planning algorithms.

  • The use of vehicle-to-vehicle communications allows improved on-board reasoning, since decisions are based on an extended perception.
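The Time To Collision computation mentioned above can be sketched, under a constant-velocity assumption, as the smallest non-negative root of a quadratic in the relative position and velocity of the two entities. This helper and its `radius` safety parameter are illustrative, not the deployed ICWS code.

```python
import math
import numpy as np

def time_to_collision(p_a, v_a, p_b, v_b, radius=2.0):
    """Smallest t >= 0 at which two entities moving at constant velocity
    come within `radius` metres of each other, i.e. the first root of
    |dp + t*dv|^2 = radius^2; returns None if they never get that close."""
    dp = np.asarray(p_b, dtype=float) - np.asarray(p_a, dtype=float)
    dv = np.asarray(v_b, dtype=float) - np.asarray(v_a, dtype=float)
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - radius ** 2
    if a == 0.0:  # identical velocities: the distance is constant
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:  # closest approach stays farther than `radius`
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    if t < 0.0:
        t = (-b + math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None
```

A warning is raised (or a speed profile adjusted by the Control Center) whenever the returned TTC drops below a safety threshold.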

Finally, since localization of ground vehicles is an important task for intelligent vehicle systems, cooperation between vehicles may benefit this task. A new cooperative multi-vehicle localization method using a split covariance intersection filter is under investigation and development, and first results are now available. In this approach, each vehicle computes its own position with its own sensors and maintains an estimate of a decomposed group state, which it shares with neighboring vehicles. The estimate of the decomposed group state is updated with both the ego-vehicle's sensor data and the estimates received from other vehicles. A covariance intersection filter, which yields consistent estimates even when the degree of inter-estimate correlation is unknown, is used for data fusion.
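The core covariance intersection rule underlying the split CI filter can be sketched as follows (plain CI, without the split decomposition; the grid search over the weight is an illustrative simplification): the fused information matrix is a convex combination of the two input information matrices, which keeps the estimate consistent whatever the unknown cross-correlation between the inputs.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=100):
    """Fuse two estimates whose cross-correlation is unknown using the
    covariance intersection rule P^-1 = w*P1^-1 + (1-w)*P2^-1, choosing
    the weight w by a coarse grid search minimising trace(P)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid + 1):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), w, P)
    _, w, P = best
    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
    return x, P
```

The split variant additionally separates each covariance into correlated and independent parts so the independent part can be fused optimally, but the convex-combination rule above is what guarantees consistency.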

Associated projects: Sharp, Icare, Complex.